[SPARK-40876][SQL] Widening type promotions in Parquet readers #44368
Conversation
    boolean failIfRebase = "EXCEPTION".equals(datetimeRebaseMode);
    return new IntegerWithRebaseUpdater(failIfRebase);
  }
} else if (sparkType == DataTypes.TimestampNTZType) {
I think we should support timestamp LTZ as well, which is DataTypes.TimestampType.
but this means the parquet reader needs to know the session timezone.
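For illustration only (this promotion is not implemented in this PR): converting a date to a TimestampLTZ value anchors midnight in the session timezone rather than UTC, which is why the reader would need the session zone. A minimal sketch using Spark's internal DateTimeUtils helper:

```scala
import java.time.ZoneId
import org.apache.spark.sql.catalyst.util.DateTimeUtils

// Days since epoch -> microseconds since epoch, interpreting midnight in the
// session timezone (as TimestampLTZ semantics would require).
def dateToTimestampLtzMicros(days: Int, sessionZone: ZoneId): Long =
  DateTimeUtils.daysToMicros(days, sessionZone)
```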
BTW, how do we know this INT32 is a logical DATE in parquet?
I added a check to only allow reading INT32 with a date annotation.
I took a stab at Date -> TimestampLTZ, but it's not trivial and we would need to discuss the expected behavior; I can follow up in a different PR if we decide we need it. It's easy to get wrong, and I'd rather disallow it for now.
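For context, here is a minimal sketch (in Scala, against the parquet-mr API; the helper name is hypothetical and is not the PR's actual code) of how a reader can confirm that an INT32 column is annotated as a logical DATE before allowing the Date -> TimestampNTZ promotion:

```scala
import org.apache.parquet.column.ColumnDescriptor
import org.apache.parquet.schema.LogicalTypeAnnotation.DateLogicalTypeAnnotation
import org.apache.parquet.schema.PrimitiveType.PrimitiveTypeName

// Hypothetical helper: true only if the column's physical type is INT32 and
// its logical type annotation is DATE.
def isAnnotatedDate(descriptor: ColumnDescriptor): Boolean = {
  val primitive = descriptor.getPrimitiveType
  primitive.getPrimitiveTypeName == PrimitiveTypeName.INT32 &&
    primitive.getLogicalTypeAnnotation.isInstanceOf[DateLogicalTypeAnnotation]
}
```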
  }
}

test("SPARK-35640: int as long should throw schema incompatible error") {
also cc @sunchao
Will this PR solve the problem described in SPARK-35461 too?
Yes, this will address SPARK-35461 - covered by this test case in particular: https://github.com/apache/spark/pull/44368/files#diff-a5cfd7285f9adf95b2aeea90aa57cc35d2b8c6bddaa0f4652172d30a264d3614R153
@cloud-fan Can you merge this PR?
    return new IntegerWithRebaseUpdater(failIfRebase);
  }
} else if (sparkType == DataTypes.TimestampNTZType && isDateTypeMatched(descriptor)) {
  if ("CORRECTED".equals(datetimeRebaseMode)) {
See #44428; NTZ should never rebase.
nvm, this is date, and we need rebase
boolean needsUpcast = sparkType == LongType || (isDate && sparkType == TimestampNTZType) ||
  !DecimalType.is32BitDecimalType(sparkType);
boolean needsRebase = logicalTypeAnnotation instanceof DateLogicalTypeAnnotation &&
  !"CORRECTED".equals(datetimeRebaseMode);
ditto
parquetType.getLogicalTypeAnnotation.isInstanceOf[DateLogicalTypeAnnotation] =>
  new ParquetPrimitiveConverter(updater) {
    override def addInt(value: Int): Unit = {
      this.updater.set(DateTimeUtils.daysToMicros(dateRebaseFunc(value), ZoneOffset.UTC))
ditto
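To make the rebase discussion concrete, here is a simplified sketch of the Date -> TimestampNTZ conversion using Spark's internal RebaseDateTime and DateTimeUtils helpers (not the PR's exact code; it ignores the EXCEPTION mode, which throws on ancient dates instead of rebasing):

```scala
import java.time.ZoneOffset
import org.apache.spark.sql.catalyst.util.{DateTimeUtils, RebaseDateTime}

// Days since epoch read from an INT32/DATE column, converted to the
// microseconds-since-epoch representation used by TimestampNTZ.
def dateToTimestampNtzMicros(days: Int, datetimeRebaseMode: String): Long = {
  val gregorianDays = if (datetimeRebaseMode == "CORRECTED") {
    days  // already proleptic Gregorian, no rebase needed
  } else {
    // LEGACY: files written by old Spark versions use the hybrid Julian calendar
    RebaseDateTime.rebaseJulianToGregorianDays(days)
  }
  DateTimeUtils.daysToMicros(gregorianDays, ZoneOffset.UTC)
}
```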
thanks, merging to master!
There are 3 failed tests in
@LuciferYang thanks for catching it! Does it block PR merging? We may need to wait for a few days as it's the holiday season. If you can fix it, that would be even better. We can revert it first if it affects other PRs.
@cloud-fan This is not a serious issue, just the ANSI mode daily test failing; I don't think we need to revert.
Just marking this: the issue has been fixed by #44481.
What changes were proposed in this pull request?
This change adds the following conversions to the vectorized and non-vectorized Parquet readers, corresponding to type promotions that are strictly widening without precision loss:
- Int -> Long
- Int -> Double
- Float -> Double
- Decimal -> Decimal with higher precision
- Date -> TimestampNTZ
Why are the changes needed?
These type promotions support two similar use cases: reading Parquet files with a read schema wider than the schema they were written with (e.g. when merging schemas across files), and reading back existing data after the type of a column or field was widened.
The second use case in particular will enable widening the type of columns or fields in existing Delta tables.
Does this PR introduce any user-facing change?
The following fails before this change:
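A minimal reproduction (assuming a spark-shell session with spark.implicits available and a writable temporary `path`; the values are illustrative):

```scala
import spark.implicits._

// Write an INT column, then read it back requesting a LONG schema.
// Before this change the read fails with a Parquet schema-incompatibility
// error; with the widening promotion it returns the values as longs.
Seq(1, 2, 3).toDF("a").write.parquet(path)
spark.read.schema("a LONG").parquet(path).collect()
```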
With the Int -> Long promotion in both the vectorized and non-vectorized Parquet readers, it succeeds and produces correct results, without overflow or loss of precision.
The same is true for Float -> Double, Int -> Double, Decimal with higher precision, and Date -> TimestampNTZ.
How was this patch tested?
A new test suite ParquetTypeWideningSuite covering the promotions included in this change, in particular:
- datetime rebase modes LEGACY and CORRECTED for Date -> TimestampNTZ
- decimals stored as INT32, INT64, BINARY, FIXED_LEN_BYTE_ARRAY
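As an illustration of the kind of round-trip the suite covers, here is a hedged sketch (assuming a QueryTest-style suite with Spark's withTempPath and checkAnswer helpers and testImplicits in scope; this is not the suite's actual contents):

```scala
import org.apache.spark.sql.Row

// Write floats with the narrow type, read them back with a widened DOUBLE
// schema, and check the values survive the promotion unchanged.
withTempPath { dir =>
  Seq(1.0f, Float.MaxValue).toDF("a").write.parquet(dir.getCanonicalPath)
  checkAnswer(
    spark.read.schema("a DOUBLE").parquet(dir.getCanonicalPath),
    Seq(Row(1.0d), Row(Float.MaxValue.toDouble)))
}
```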
Was this patch authored or co-authored using generative AI tooling?
No